Quantized minimax estimation over Sobolev ellipsoids
Similar papers
Quantized minimax estimation over Sobolev ellipsoids
We formulate the notion of minimax estimation under storage or communication constraints, and prove an extension to Pinsker’s theorem for non-parametric estimation over Sobolev ellipsoids. Placing limits on the number of bits used to encode any estimator, we give tight lower and upper bounds on the excess risk due to quantization in terms of the number of bits, the signal size and the noise lev...
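As an illustrative sketch of the trade-off described in this abstract (not the paper's construction — the signal, noise level, and uniform scalar quantizer below are all assumptions), one can encode each coefficient of a noisy observation with a fixed bit budget and compare the resulting risk with the unquantized one:

```python
import numpy as np

# Hypothetical setup: a smooth (Sobolev-type) signal observed in
# Gaussian noise, estimated coefficient-by-coefficient.
rng = np.random.default_rng(0)
n = 512
j = np.arange(1, n + 1)
theta = 1.0 / j**2                  # assumed signal with decaying coefficients
sigma = 0.05                        # assumed noise level
y = theta + sigma * rng.standard_normal(n)

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniform scalar quantizer with 2**bits levels on [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step   # map each cell to its midpoint

# Risk without quantization vs. risk after encoding with 4 bits/coefficient.
risk_raw = np.mean((y - theta) ** 2)
risk_q = np.mean((quantize(y, bits=4) - theta) ** 2)
print(risk_raw, risk_q)
```

The gap between `risk_q` and `risk_raw` is the empirical analogue of the "excess risk due to quantization" the abstract refers to; it shrinks as the bit budget grows.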
Adaptive Minimax Estimation over Sparse q-Hulls
Given a dictionary of Mn initial estimates of the unknown true regression function, we aim to construct linearly aggregated estimators that target the best performance among all the linear combinations under a sparse ℓq-norm (0 ≤ q ≤ 1) constraint on the linear coefficients. Besides identifying the optimal rates of aggregation for these ℓq-aggregation problems, our multi-directional (or adaptive…
Sharp Adaptive Nonparametric Testing for Sobolev Ellipsoids
Michael Nussbaum (joint work with Pengsheng Ji). Consider the Gaussian white noise model in sequence space, Y_j = f_j + n^{-1/2} ξ_j, j = 1, 2, …, with signal f = {f_j}_{j≥1} and ξ_j ∼ N(0, 1) independent. For some ρ, α, M > 0, consider hypotheses of "no signal" vs. an ellipsoid with an ℓ2-ball removed: H_0 : f = 0 against H_a : f ∈ Σ(α, M) \ B_ρ, …
Minimax Estimation of Linear Functionals Over Nonconvex Parameter Spaces
The minimax theory for estimating linear functionals is extended to the case of a finite union of convex parameter spaces. Upper and lower bounds for the minimax risk can still be described in terms of a modulus of continuity. However, in contrast to the theory for convex parameter spaces, rate-optimal procedures are often required to be nonlinear. A construction of such nonlinear procedures is g…
Adaptive minimax regression estimation over sparse ℓq-hulls
Given a dictionary of Mn predictors, in a random design regression setting with n observations, we construct estimators that target the best performance among all the linear combinations of the predictors under a sparse ℓq-norm (0 ≤ q ≤ 1) constraint on the linear coefficients. Besides identifying the optimal rates of convergence, our universal aggregation strategies by model mixing achieve the…
Journal: Information and Inference: A Journal of the IMA
Year: 2017
ISSN: 2049-8764,2049-8772
DOI: 10.1093/imaiai/iax007